
[Relax][PyTorch] Fix scalar parameter inputs in Dynamo #18725

Merged
tlopex merged 2 commits into apache:main from mshr-h:fix-dynamo-scalar-params on Feb 7, 2026

Conversation

@mshr-h (Contributor) commented Feb 7, 2026

Ensure scalar parameter placeholders are forwarded to the Relax VM. This fixes the failing model reported in pytorch/pytorch#169188.
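
A minimal repro sketch of the failing pattern, assuming the relax_dynamo backend exposed by tvm.relax.frontend.torch.dynamo (the import path and the ScalarParamModel module below are illustrative, not the exact model from pytorch/pytorch#169188):

    import torch
    from tvm.relax.frontend.torch.dynamo import relax_dynamo

    class ScalarParamModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # Zero-dimensional parameter: previously dropped before reaching the Relax VM.
            self.scale = torch.nn.Parameter(torch.tensor(2.0))

        def forward(self, x):
            return x * self.scale

    model = ScalarParamModel()
    compiled = torch.compile(model, backend=relax_dynamo())
    x = torch.randn(4, 4)
    # With the fix, the compiled function matches eager execution.
    torch.testing.assert_close(compiled(x), model(x))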

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @mshr-h, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug where scalar parameters were not being properly recognized and passed to the Relax VM through the Dynamo frontend. The changes ensure that zero-dimensional torch.Tensor inputs are now correctly processed, resolving a reported issue and enhancing the compatibility and robustness of the PyTorch-Relax integration. Additionally, it includes a refinement in how constant expressions are handled during graph translation to improve type consistency.

Highlights

  • Scalar Parameter Input Fix: Resolved an issue where scalar torch.Tensor arguments were not correctly forwarded to the Relax VM when using the Dynamo frontend.
  • Constant Expression Handling: Refined the logic for handling constant expressions in binary operations within the graph translator to prevent incorrect type casting when both operands are constants.
  • New Test Case: Introduced a new test, test_relax_dynamo_scalar_params, to specifically validate the correct processing of scalar torch.nn.Parameters by the Relax Dynamo backend.


Changelog
  • python/tvm/relax/frontend/torch/base_fx_graph_translator.py
    • Modified call_binary_op to add explicit checks (and not isinstance(rhs, relax.expr.Constant)) when promoting types for binary operations involving relax.expr.Constant to avoid redundant or incorrect casting if both operands are already constants.
  • python/tvm/relax/frontend/torch/dynamo.py
    • Removed the if arg.dim() != 0: condition within exec_tvm, allowing scalar torch.Tensor arguments (tensors with zero dimensions) to be converted to TVM tensors and passed to the VM (a hedged sketch of this change follows the changelog).
  • tests/python/relax/test_frontend_dynamo.py
    • Added test_relax_dynamo_scalar_params, a new test case that defines a torch.nn.Module with scalar parameters and verifies that torch.compile with relax_dynamo produces the correct output.
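
A hedged sketch of the exec_tvm change referenced in the changelog above; the loop is paraphrased from the changelog entry, and to_tvm_tensor / vm_args are stand-in names, not necessarily the identifiers used in dynamo.py:

    # Before: the guard skipped zero-dimensional (scalar) tensors,
    # so scalar parameters never reached the Relax VM.
    for arg in args:
        if arg.dim() != 0:
            vm_args.append(to_tvm_tensor(arg))

    # After: every tensor argument, including 0-dim scalars, is converted and forwarded.
    for arg in args:
        vm_args.append(to_tvm_tensor(arg))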
Activity
  • No human activity (comments, reviews, etc.) has been recorded for this pull request yet.

@gemini-code-assist (bot) left a comment

Code Review

This pull request correctly addresses an issue with scalar parameter inputs in the PyTorch Dynamo frontend for Relax. The changes in dynamo.py ensure that scalar tensors are properly handled, and the new test case in test_frontend_dynamo.py effectively validates this fix. The modification in base_fx_graph_translator.py resolves a bug concerning binary operations on two constants. I have provided a suggestion to simplify this part of the code for better readability and conciseness. Overall, this is a solid improvement.

Comment on lines 506 to 512 (python/tvm/relax/frontend/torch/base_fx_graph_translator.py):

            if isinstance(lhs, relax.Var) or isinstance(rhs, relax.Var):
                return call_binary_op(relax_op, lhs, rhs)
-           elif isinstance(lhs, relax.expr.Constant):
+           elif isinstance(lhs, relax.expr.Constant) and not isinstance(rhs, relax.expr.Constant):
                return call_binary_op(relax_op, lhs, relax.const(rhs, dtype=lhs.struct_info.dtype))
-           elif isinstance(rhs, relax.expr.Constant):
+           elif isinstance(rhs, relax.expr.Constant) and not isinstance(lhs, relax.expr.Constant):
                return call_binary_op(relax_op, relax.const(lhs, dtype=rhs.struct_info.dtype), rhs)
            return intrinsic_op(lhs, rhs)

@gemini-code-assist (bot) commented (severity: medium):

While this change correctly fixes the issue with binary operations on two constants, the logic can be simplified. The call_binary_op function already contains promote_binary_op_args, which handles the promotion of Python scalars to Relax constants when one of the operands is a Relax expression. We can leverage this to make the code more concise and readable.

            if isinstance(lhs, relax.Expr) or isinstance(rhs, relax.Expr):
                return call_binary_op(relax_op, lhs, rhs)
            return intrinsic_op(lhs, rhs)

@mshr-h changed the title from "[Relax][Dynamo] Fix scalar parameter inputs" to "[Relax][PyTorch][Dynamo] Fix scalar parameter inputs" on Feb 7, 2026
@mshr-h changed the title from "[Relax][PyTorch][Dynamo] Fix scalar parameter inputs" to "[Relax][PyTorch] Fix scalar parameter inputs in Dynamo" on Feb 7, 2026
@mshr-h marked this pull request as ready for review on February 7, 2026 at 16:49
@tlopex merged commit 9eac0e1 into apache:main on Feb 7, 2026
16 checks passed
@mshr-h deleted the fix-dynamo-scalar-params branch on February 8, 2026 at 02:24
